In this paper, we propose a general robust subband adaptive filtering (GR-SAF) scheme against impulsive noise, derived by minimizing the mean-square deviation under a random-walk model with individual weight uncertainty. Specifically, by choosing different scaling factors in the GR-SAF scheme, e.g., from the M-estimate or maximum correntropy robust criteria, we can easily obtain different GR-SAF algorithms. Importantly, the proposed GR-SAF algorithms can be reduced to variable-regularization robust normalized SAF algorithms, and thus achieve fast convergence and low steady-state error. Simulations on system identification with impulsive noise and on echo cancellation with double-talk confirm that the proposed GR-SAF algorithms outperform their counterparts.
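The key robustness mechanism is an error-dependent scaling factor that shrinks the adaptation step when an impulsive sample is detected. The sketch below is a deliberately simplified, fullband illustration of that idea, a normalized LMS update with a Huber-type M-estimate scaling factor, not the exact subband GR-SAF recursion; the filter length, step size, and threshold values are illustrative choices.

```python
import numpy as np

def robust_nlms(x, d, L=8, mu=0.5, eps=1e-6, xi=2.0):
    """Huber-scaled NLMS: a fullband, simplified sketch of robust
    normalized adaptive filtering (not the exact GR-SAF recursion).
    x: input signal, d: desired signal, L: filter length,
    xi: Huber threshold controlling the M-estimate scaling factor."""
    w = np.zeros(L)
    e_hist = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - u @ w               # a priori error
        # M-estimate (Huber) scaling factor: 1 for small errors,
        # shrinks the update when |e| is large (likely an impulse)
        q = 1.0 if abs(e) <= xi else xi / abs(e)
        w += mu * q * e * u / (u @ u + eps)
        e_hist[n] = e
    return w, e_hist
```

With `q` fixed to 1 this reduces to plain NLMS, which diverges badly on impulsive samples; the scaling factor caps each update's magnitude so isolated impulses barely perturb the weights.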
This paper studies diffusion adaptation over clustered multi-task networks in the presence of impulsive interference and Byzantine attacks. We develop a robust resilient diffusion algorithm (RDLMG) based on the cost function of the Geman-McClure estimator, which reduces sensitivity to large outliers and makes the algorithm robust under impulsive interference. Moreover, a mean-subsequence-reduced method, in which each node discards the extreme values among the cost contributions received from its neighbors, makes the network resilient against Byzantine attacks. The proposed RDLMG algorithm thereby ensures that all normal nodes converge to their desired states through cooperation among nodes. A statistical analysis of the RDLMG algorithm is also carried out in terms of mean and mean-square performance. Numerical results evaluate the proposed RDLMG algorithm in applications to multi-target localization and multi-task spectrum sensing.
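The two robustness ingredients are easy to state in isolation. Below is a minimal sketch of (i) the Geman-McClure influence function, whose redescending shape suppresses large outliers, and (ii) a trimmed-mean combiner in the spirit of the mean-subsequence-reduced rule; the specific function signatures and the scale parameter are illustrative, not the paper's exact recursions.

```python
import numpy as np

def gm_influence(e, sigma=1.0):
    """Derivative of the Geman-McClure cost rho(e) = e^2 / (sigma^2 + e^2).
    It grows for small errors but decays toward 0 for large |e|,
    so impulsive samples contribute almost nothing to the update."""
    return 2 * sigma**2 * e / (sigma**2 + e**2) ** 2

def trimmed_mean(values, f=1):
    """Mean-subsequence-reduced style combiner (illustrative): discard
    the f largest and f smallest received values before averaging,
    bounding the influence of up to f Byzantine neighbors."""
    v = np.sort(np.asarray(values, dtype=float))
    return v[f:len(v) - f].mean()
```

A node using `trimmed_mean` over its neighbors' contributions cannot be dragged arbitrarily far by a single malicious value, which is the essence of the resilience guarantee.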
Previous works have shown that Deep-RL can be applied to perform mapless navigation, including the medium transition of hybrid unmanned aerial-underwater vehicles (HUAUVs). This paper presents new approaches based on state-of-the-art actor-critic algorithms to address the navigation and medium-transition problems of HUAUVs. We show that double-critic Deep-RL with recurrent neural networks improves the navigation performance of HUAUVs using only range data and relative localization. Our Deep-RL approaches achieve better navigation and transitioning capabilities, with solid generalization of learning across different simulated scenarios, outperforming previous approaches.
Deterministic and stochastic techniques in Deep Reinforcement Learning (Deep-RL) have become a promising solution for improving motion control and decision-making tasks for a variety of robots. Previous works showed that these Deep-RL algorithms can be applied to perform mapless navigation of mobile robots in general. However, they tend to use simple sensing strategies, since these algorithms have been shown to perform poorly with high-dimensional state spaces, such as those arising from image-based sensing. This paper presents a comparative analysis of two Deep-RL techniques, Deep Deterministic Policy Gradient (DDPG) and Soft Actor-Critic (SAC), when performing mapless navigation for mobile robots. We aim to contribute by showing how the neural network architecture influences the learning itself, presenting quantitative results based on navigation time and distance for aerial mobile robots under each approach. Overall, our analysis of six distinct architectures highlights that the stochastic approach (SAC) works better with deeper architectures, while the opposite happens with the deterministic approach (DDPG).
A novel sparse array (SA) configuration is proposed based on the maximum inter-element spacing (IES) constraint (MISC) criterion. Compared with the traditional MISC array, the proposed SA configuration, termed improved MISC (IMISC), significantly increases the uniform degrees of freedom (uDOF) and reduces mutual coupling. In particular, the IMISC array consists of six uniform linear arrays (ULAs), which can be determined by an IES set. The IES set is constrained by two parameters, namely the maximum IES and the number of sensors. The uDOF of the IMISC arrays is derived, and the weight function of the IMISC arrays is analyzed as well. The proposed IMISC arrays have a great advantage in terms of uDOF over existing SAs, while their mutual coupling remains at a low level. Simulations are conducted to demonstrate the advantages of the IMISC arrays.
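The uniform degrees of freedom of any sparse array can be checked numerically: form the difference coarray of the sensor positions and count the longest run of contiguous lags centred at zero. The sketch below is that generic computation, not the closed-form IMISC expression derived in the paper; the example position sets are standard textbook arrays, not an IMISC layout.

```python
import numpy as np

def uniform_dof(positions):
    """Uniform degrees of freedom (uDOF) of a sparse array: the length of
    the longest contiguous segment of the difference coarray centred at 0.
    Generic coarray computation, not the closed-form IMISC formula."""
    p = np.asarray(positions)
    diffs = np.unique((p[:, None] - p[None, :]).ravel())
    m = 0
    while (m + 1) in diffs:      # lags are symmetric, so scan positives only
        m += 1
    return 2 * m + 1             # lags -m..m are all present
```

For an N-sensor ULA this returns 2N-1, while well-designed sparse geometries (nested, coprime, MISC-type) achieve far larger uDOF with the same sensor count.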
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is not generally made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performances of 20 classification algorithms among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
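Two of the most common techniques in this family are min-max scaling and z-score standardization (the paper compares five; these two serve only to illustrate why the choice matters). A minimal sketch:

```python
import numpy as np

def minmax_scale(X):
    """Rescale each attribute (column) to the [0, 1] range."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

def zscore_scale(X):
    """Standardize each attribute to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)
```

Before scaling, an attribute measured in the thousands dominates Euclidean distances and gradient magnitudes; after either transformation all attributes contribute comparably, but the two techniques distort attribute distributions differently, which is exactly the source of the performance gaps the paper measures.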
The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet, common datasets, benchmarks and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate difficulties and limitations when training networks with reduced texture bias. In particular, we also show that proper evaluation and meaningful comparisons between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training, including multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results, despite the considerable training instability of some style bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). For example, we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBed
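The core of statistically founded evaluation is simple: compare paired per-seed scores of two methods with a significance test rather than eyeballing means. The sketch below uses a paired sign-flip permutation test as one illustration of such a test; BiasBed's actual protocol may use different tests and corrections, and the function name and parameters here are assumptions.

```python
import numpy as np

def paired_permutation_test(a, b, n_resamples=10000, seed=0):
    """Two-sided paired permutation (sign-flip) test on the mean
    difference between matched score lists a and b, e.g. accuracies
    of two algorithms over the same random seeds. Returns a p-value."""
    rng = np.random.default_rng(seed)
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(d.mean())
    count = 0
    for _ in range(n_resamples):
        signs = rng.choice([-1.0, 1.0], size=len(d))
        if abs((signs * d).mean()) >= observed:
            count += 1
    return (count + 1) / (n_resamples + 1)
```

A small p-value indicates the score gap is unlikely under the null of no difference; with the noisy, unstable training runs the paper describes, gaps that look large can easily fail such a test.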
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from engaging in unfair decision-making, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Fine-grained population maps are needed in several domains, like urban planning, environmental monitoring, public health, and humanitarian operations. Unfortunately, in many countries only aggregate census counts over large spatial units are collected; moreover, these are not always up to date. We present POMELO, a deep learning model that employs coarse census counts and open geodata to estimate fine-grained population maps with 100m ground sampling distance. Moreover, the model can also estimate population numbers when no census counts at all are available, by generalizing across countries. In a series of experiments for several countries in sub-Saharan Africa, the maps produced with POMELO are in good agreement with the most detailed available reference counts: disaggregation of coarse census counts reaches R² values of 85-89%; unconstrained prediction in the absence of any counts reaches 48-69%.
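In the constrained setting, disaggregation means redistributing each coarse census count over its fine grid cells in proportion to a model's predicted occupancy, so that cell estimates sum exactly to the reported total. The sketch below shows only that final dasymetric redistribution step under this assumption; POMELO's predictor itself is a deep network over open geodata, which is not reproduced here.

```python
import numpy as np

def disaggregate(coarse_count, weights):
    """Dasymetric disaggregation of one coarse census count over its
    fine grid cells, proportional to predicted occupancy weights.
    The per-cell estimates always sum back to coarse_count."""
    w = np.asarray(weights, dtype=float)
    w = np.where(w < 0, 0.0, w)   # occupancy cannot be negative
    if w.sum() == 0:
        # no signal at all: fall back to a uniform split
        return np.full(len(w), coarse_count / len(w))
    return coarse_count * w / w.sum()
```

Because the redistribution is exactly mass-preserving, errors in the predictor only affect where people are placed within a census unit, never the unit's total.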